Adversarial training
Training a model on adversarial examples: inputs that are close to the original training examples — i.e. the perturbations are indistinguishable to humans — but that nonetheless cause the model to misclassify.
Goodfellow et al. proposed the [[fast gradient sign method]] (FGSM) to generate adversarial examples quickly: perturb each input along the sign of the gradient of the loss, x' = x + ε · sign(∇ₓ ℓ(θ, x, y)). Training on these perturbed inputs effectively smooths the model's predictions in an ε-neighborhood around each training example.
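A minimal sketch of one FGSM adversarial-training step in PyTorch, assuming a classifier with a differentiable loss; the function names, the default ε, and the equal weighting of clean and adversarial losses are illustrative choices, not part of this note:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon):
    """Fast gradient sign method: x' = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.01):
    """Train on the clean batch plus its FGSM-perturbed copy (equal weights assumed)."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while building x_adv
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Adding the loss on the perturbed batch to the clean loss is what encourages the predictions to stay consistent within the ε-neighborhood.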
Backlinks
scalable-uncertainties-from-deep-ensembles
2. [[Adversarial training]] to smooth the predictive distribution